
    Optimal Decomposition Strategy For Tree Edit Distance

    An ordered labeled tree is a tree in which the left-to-right order among siblings is significant. Given two ordered labeled trees, the edit distance between them is the minimum cost of a sequence of edit operations that converts one tree into the other. In this thesis, we present an algorithm for the tree edit distance problem that uses an optimal tree decomposition strategy. By combining vertical compression of trees with optimal decomposition, we can significantly reduce the running time of the algorithm. We compare our method with other methods both theoretically and experimentally. The test results show that our strategies on compressed trees form by far the best decomposition strategy, creating the smallest number of relevant subproblems.
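
    The abstract pairs its decomposition strategy with vertical compression of trees. As a rough illustration of what vertical compression does, the sketch below collapses every maximal unary chain (a run of nodes that each have exactly one child) into a single node; the `Node` class and the label-joining scheme are our own illustrative choices, not the thesis's actual data structures.

    ```python
    # Minimal sketch of vertical (chain) compression for ordered labeled trees:
    # every maximal run of nodes that each have exactly one child is merged
    # into a single node carrying the joined labels.

    class Node:
        def __init__(self, label, children=None):
            self.label = label
            self.children = children or []

    def compress(root):
        """Return a copy of the tree with every unary chain collapsed."""
        labels = [root.label]
        node = root
        while len(node.children) == 1:       # walk down the unary chain
            node = node.children[0]
            labels.append(node.label)
        merged = Node("/".join(labels))      # one node stands in for the chain
        merged.children = [compress(c) for c in node.children]
        return merged

    # Example: a -> b -> (c, d) compresses to a single node "a/b" over (c, d).
    t = Node("a", [Node("b", [Node("c"), Node("d")])])
    print(compress(t).label)  # prints: a/b
    ```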

    A device-level characterization approach to quantify the impacts of different random variation sources in FinFET technology

    A simple device-level characterization approach to quantitatively evaluate the impacts of different random variation sources in FinFETs is proposed. The impact of random dopant fluctuation is negligible for FinFETs with a lightly doped channel, leaving metal gate granularity and line-edge roughness as the two major random variation sources. The variations of Vth induced by these two major sources are theoretically decomposed based on the distinction in their physical mechanisms and their influences on different electrical characteristics. The effectiveness of the proposed method is confirmed through both TCAD simulations and experimental results. This letter can provide helpful guidelines for variation-aware technology development.
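
    One way to read the decomposition described here: if the two major sources are statistically independent, their contributions to the Vth variance add, so one contribution can be inferred once the total and the other contribution are known. The sketch below illustrates that additive model; it is our simplification with hypothetical numbers, not the letter's actual characterization procedure.

    ```python
    import math

    # Illustrative additive model: with independent sources, variances add, so
    #   sigma_total^2 = sigma_mgg^2 + sigma_ler^2
    # and one contribution can be backed out from the other two quantities.
    # The numbers below are hypothetical, not measurements from the letter.
    sigma_total = 30.0   # mV, overall sigma(Vth)
    sigma_ler = 18.0     # mV, line-edge roughness contribution

    sigma_mgg = math.sqrt(sigma_total**2 - sigma_ler**2)
    print(f"inferred metal gate granularity contribution: {sigma_mgg:.1f} mV")  # 24.0 mV
    ```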

    A Unified PTAS for Prize Collecting TSP and Steiner Tree Problem in Doubling Metrics

    We present a unified (randomized) polynomial-time approximation scheme (PTAS) for the prize collecting traveling salesman problem (PCTSP) and the prize collecting Steiner tree problem (PCSTP) in doubling metrics. Given a metric space and a penalty function on a subset of points known as terminals, a solution is a subgraph on points in the metric space whose cost is the weight of its edges plus the penalty due to terminals not covered by the subgraph. Under our unified framework, the solution subgraph needs to be Eulerian for PCTSP, while it needs to be a tree for PCSTP. Before our work, not even a QPTAS for these problems in doubling metrics was known. Our unified PTAS is based on the previous dynamic programming frameworks proposed in [Talwar STOC 2004] and [Bartal, Gottlieb, Krauthgamer STOC 2012]. However, since it is unknown which part of the optimal cost is due to edge lengths and which part is due to penalties of uncovered terminals, we need to develop new techniques to apply the previous divide-and-conquer strategies and sparse instance decompositions.
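
    The prize-collecting objective described above (edge weight plus penalties of uncovered terminals) is easy to state concretely. The sketch below evaluates that cost for a candidate subgraph; the function name and data layout are illustrative assumptions of ours, not part of the paper.

    ```python
    # Evaluate the prize-collecting cost of a candidate subgraph: total edge
    # weight plus the penalties of terminals the subgraph leaves uncovered.

    def pc_cost(edges, edge_weight, covered, penalties):
        """edges: iterable of (u, v) pairs; edge_weight: dict (u, v) -> length;
        covered: set of terminals touched by the subgraph;
        penalties: dict mapping each terminal to its penalty."""
        wiring = sum(edge_weight[e] for e in edges)
        penalty = sum(p for t, p in penalties.items() if t not in covered)
        return wiring + penalty

    # Toy instance: skipping terminal "c" costs its penalty instead of an edge.
    edge_weight = {("a", "b"): 2.0, ("b", "c"): 5.0}
    penalties = {"a": 1.0, "b": 1.0, "c": 3.0}
    print(pc_cost([("a", "b")], edge_weight, {"a", "b"}, penalties))  # 2.0 + 3.0 = 5.0
    ```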

    Streaming Euclidean Max-Cut: Dimension vs Data Reduction

    Max-Cut is a fundamental problem that has been studied extensively in various settings. We design an algorithm for Euclidean Max-Cut, where the input is a set of points in $\mathbb{R}^d$, in the model of dynamic geometric streams, where the input $X \subseteq [\Delta]^d$ is presented as a sequence of point insertions and deletions. Previously, Frahling and Sohler [STOC 2005] designed a $(1+\epsilon)$-approximation algorithm for the low-dimensional regime, i.e., it uses space $\exp(d)$. To tackle this problem in the high-dimensional regime, which is of growing interest, one must improve the dependence on the dimension $d$, ideally to space complexity $\mathrm{poly}(\epsilon^{-1} d \log\Delta)$. Lammersen, Sidiropoulos, and Sohler [WADS 2009] proved that Euclidean Max-Cut admits dimension reduction with target dimension $d' = \mathrm{poly}(\epsilon^{-1})$. Combining this with the aforementioned algorithm that uses space $\exp(d')$, they obtain an algorithm whose overall space complexity is indeed polynomial in $d$, but unfortunately exponential in $\epsilon^{-1}$. We devise an alternative approach of \emph{data reduction}, based on importance sampling, and achieve space bound $\mathrm{poly}(\epsilon^{-1} d \log\Delta)$, which is exponentially better (in $\epsilon$) than the dimension-reduction approach. To implement this scheme in the streaming model, we employ a randomly-shifted quadtree to construct a tree embedding. While this is a well-known method, a key feature of our algorithm is that the embedding's distortion $O(d \log\Delta)$ affects only the space complexity, and the approximation ratio remains $1+\epsilon$.
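
    The randomly-shifted quadtree mentioned at the end is a standard construction: shift all points by one uniform random vector, then bucket them into axis-aligned cells whose side length halves at each level. A minimal sketch of the cell assignment, with parameter names of our own choosing:

    ```python
    import random

    def make_shift(d, delta, rng=random):
        """One uniform random shift vector in [0, delta)^d."""
        return [rng.uniform(0, delta) for _ in range(d)]

    def cell_id(point, shift, delta, level):
        """Identify the level-`level` quadtree cell containing `point`."""
        side = delta / (2 ** level)          # cell side length halves per level
        return tuple(int((x + s) // side) for x, s in zip(point, shift))

    # Nearby points usually share a cell, except near the random cell boundaries.
    shift = make_shift(d=2, delta=1024)
    print(cell_id((3.0, 7.0), shift, delta=1024, level=8))
    print(cell_id((4.0, 7.0), shift, delta=1024, level=8))
    ```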

    Optimal Sizing of On-Board Energy Storage Devices for Electrified Railway Systems


    Coresets for Clustering with General Assignment Constraints

    Designing small-sized \emph{coresets}, which approximately preserve the costs of the solutions for large datasets, has been an important research direction for the past decade. We consider coreset construction for a variety of general constrained clustering problems. We significantly extend and generalize the results of a very recent paper (Braverman et al., FOCS'22), by demonstrating that the idea of hierarchical uniform sampling (Chen, SICOMP'09; Braverman et al., FOCS'22) can be applied to efficiently construct coresets for a very general class of constrained clustering problems with general assignment constraints, including capacity constraints on cluster centers, and assignment structure constraints for data points (modeled by a convex body $\mathcal{B}$). Our main theorem shows that a small-sized $\epsilon$-coreset exists as long as a complexity measure $\mathsf{Lip}(\mathcal{B})$ of the structure constraint, and the \emph{covering exponent} $\Lambda_\epsilon(\mathcal{X})$ for metric space $(\mathcal{X}, d)$, are bounded. The complexity measure $\mathsf{Lip}(\mathcal{B})$ for a convex body $\mathcal{B}$ is the Lipschitz constant of a certain transportation problem constrained in $\mathcal{B}$, called the \emph{optimal assignment transportation problem}. We prove nontrivial upper bounds on $\mathsf{Lip}(\mathcal{B})$ for various polytopes, including general matroid basis polytopes, and laminar matroid polytopes (with a better bound). As an application of our general theorem, we construct the first coreset for the fault-tolerant clustering problem (with or without capacity upper/lower bounds) for the above metric spaces, in which the fault-tolerance requirement is captured by a uniform matroid basis polytope.
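
    For intuition about sampling-based coreset constructions in general, the sketch below shows the generic importance-sampling recipe: draw points with probability proportional to a sensitivity score, then reweight by the inverse sampling probability so that weighted costs remain unbiased. This is a simplification of ours, not the paper's hierarchical uniform sampling, and the sensitivity scores are placeholders.

    ```python
    import random

    def coreset(points, sensitivities, m, rng=random):
        """Sample m weighted points; weight 1/(m * p_i) keeps costs unbiased."""
        total = sum(sensitivities)
        probs = [s / total for s in sensitivities]
        sample = rng.choices(range(len(points)), weights=probs, k=m)
        return [(points[i], 1.0 / (m * probs[i])) for i in sample]

    pts = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
    sens = [1.0, 1.0, 5.0]          # far-out points get higher sensitivity
    for p, w in coreset(pts, sens, m=2):
        print(p, w)
    ```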

    Facile method to synthesize magnetic iron oxides/TiO2 hybrid nanoparticles and their photodegradation application of methylene blue

    Many methods have been reported for improving the photocatalytic degradation efficiency of organic pollutants and enabling reliable applications. In this work, we propose a facile pathway to prepare three different types of magnetic iron oxide/TiO2 hybrid nanoparticles (NPs) by a seed-mediated method. The hybrid NPs are composed of spindle, hollow, and ultrafine iron oxide NPs as seeds, with 3-aminopropyltriethoxysilane as the linker between the magnetic cores and the TiO2 layers. The composite structure and the presence of the iron oxide and titania phases have been confirmed by transmission electron microscopy, X-ray diffraction, and X-ray photoelectron spectroscopy. The hybrid NPs show a good magnetic response: they aggregate under an externally applied magnetic field, and hence they are promising magnetic recovery catalysts (MRCs). The photocatalytic ability of the magnetic hybrid NPs was examined in methylene blue (MB) solutions illuminated by a Hg lamp in a photochemical reactor. About 50% to 60% of the MB was decomposed within 90 min in the presence of the magnetic hybrid NPs. The synthesized magnetic hybrid NPs display high photocatalytic efficiency and hold promise for recoverable applications in cleaning polluted water with the help of magnetic separation.
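
    For a sense of scale: if one assumes the common pseudo-first-order model C(t) = C0 * exp(-k*t) for photocatalytic degradation (an assumption of ours; the abstract reports only the percentages), the reported 50% to 60% decomposition in 90 min corresponds to an apparent rate constant of roughly 0.008 to 0.010 per minute.

    ```python
    import math

    # Back out an apparent pseudo-first-order rate constant k from
    #   C(t) = C0 * exp(-k * t)
    # using the reported 50% and 60% decomposition at t = 90 min.
    # The kinetic model is our assumption, not stated in the abstract.
    for removed in (0.50, 0.60):
        k = -math.log(1.0 - removed) / 90.0   # per minute
        print(f"{removed:.0%} removed in 90 min -> k ~ {k:.4f} min^-1")
    # 50% -> k ~ 0.0077 min^-1; 60% -> k ~ 0.0102 min^-1
    ```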